7 research outputs found

    Human-robot co-navigation using anticipatory indicators of human walking motion

    Mobile, interactive robots that operate in human-centric environments need the capability to safely and efficiently navigate around humans. This requires the ability to sense and predict human motion trajectories and to plan around them. In this paper, we present a study that supports the existence of statistically significant biomechanical turn indicators of human walking motions. Further, we demonstrate the effectiveness of these turn indicators as features in the prediction of human motion trajectories. Human motion capture data is collected with predefined goals to train and test a prediction algorithm. Use of the anticipatory features results in improved performance of the prediction algorithm. Lastly, we demonstrate the closed-loop performance of the prediction algorithm using an existing algorithm for motion planning within dynamic environments. The anticipatory indicators of human walking motion can be used with different prediction and/or planning algorithms for robotics; the chosen planning and prediction algorithms demonstrate one such implementation for human-robot co-navigation.
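    As a concrete illustration (a minimal sketch, not the authors' code), anticipatory turn indicators can be treated as window-level features feeding a classifier that predicts an upcoming turn; the feature choices (head-torso yaw offset, torso yaw rate, lateral velocity) and the logistic-regression model below are assumptions made for this sketch.

        # Minimal sketch: hypothetical biomechanical turn indicators as classifier features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def turn_features(window):
            """window: (T, 3) array of [head_yaw, torso_yaw, lateral_velocity] per frame."""
            head_minus_torso = window[:, 0] - window[:, 1]   # head leads the torso before a turn
            torso_yaw_rate = np.gradient(window[:, 1])       # body rotation rate
            lateral_velocity = window[:, 2]                  # sideways drift of the pelvis
            return np.array([head_minus_torso.mean(), torso_yaw_rate.mean(), lateral_velocity.mean()])

        # Synthetic training data: 0 = straight, 1 = left turn, 2 = right turn.
        rng = np.random.default_rng(0)
        X, y = [], []
        for label, bias in [(0, 0.0), (1, +0.4), (2, -0.4)]:
            for _ in range(50):
                window = rng.normal(0.0, 0.05, size=(30, 3))
                window[:, 0] += bias                         # head yaw biased toward the turn
                X.append(turn_features(window))
                y.append(label)

        clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
        new_window = rng.normal(0.0, 0.05, size=(30, 3))
        new_window[:, 0] += 0.4
        print("predicted turn class:", clf.predict([turn_features(new_window)])[0])

    In a full co-navigation pipeline, such a turn-class estimate could be fed to the trajectory predictor whose output the motion planner consumes.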

    An Architecture for Online Affordance-based Perception and Whole-body Planning

    The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.
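    To make the affordance idea concrete, here is a minimal sketch (with assumed, illustrative types; not Team MIT's API) of how an operator-fit affordance could be reduced to end-effector goals for a whole-body planner.

        # Minimal sketch: an affordance as a fitted object pose plus grasp offsets.
        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class Affordance:
            name: str
            pose: np.ndarray                                   # 4x4 transform, world frame
            grasp_offsets: list = field(default_factory=list)  # 4x4 transforms, object frame

            def grasp_goals(self):
                """End-effector goal poses in the world frame, one per stored grasp offset."""
                return [self.pose @ offset for offset in self.grasp_offsets]

        # Example: a valve affordance fit (e.g., by nudging a template over the point cloud) 1 m ahead.
        valve_pose = np.eye(4); valve_pose[0, 3] = 1.0
        side_grasp = np.eye(4); side_grasp[1, 3] = 0.15        # approach 15 cm off the valve rim
        valve = Affordance("valve", valve_pose, [side_grasp])
        for goal in valve.grasp_goals():
            print("EE goal position for the planner:", goal[:3, 3])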

    Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

    A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains hard because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing, and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions and illuminating promising new directions. We believe the same approach can benefit social navigation. In this paper, we pave the road towards common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context; (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation; and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots, and datasets.
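    As one example of the kind of metric such a framework could standardize, the sketch below computes minimum human-robot separation and time spent inside a personal-space radius from synchronized position logs; the 0.45 m radius, sampling period, and function names are assumptions for illustration, not values prescribed by the paper.

        # Minimal sketch: two trajectory-level social navigation metrics from logged positions.
        import numpy as np

        def personal_space_metrics(robot_xy, human_xy, radius=0.45, dt=0.1):
            """robot_xy, human_xy: (T, 2) planar positions sampled every dt seconds."""
            dists = np.linalg.norm(robot_xy - human_xy, axis=1)
            return {
                "min_separation_m": float(dists.min()),
                "time_in_personal_space_s": float((dists < radius).sum() * dt),
            }

        # Example: the robot passes a stationary human at about 0.4 m lateral offset.
        t = np.linspace(-2.0, 2.0, 41)
        robot = np.stack([t, np.zeros_like(t)], axis=1)
        human = np.tile([0.0, 0.4], (41, 1))
        print(personal_space_metrics(robot, human))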

    Fast target prediction of human reaching motion for cooperative human-robot manipulation tasks using time series classification

    Interest in human-robot coexistence, in which humans and robots share a common work volume, is increasing in manufacturing environments. Efficient work coordination requires both awareness of the human pose and a plan of action for both human and robot agents in order to compute robot motion trajectories that synchronize naturally with human motion. In this paper, we present a data-driven approach that synthesizes anticipatory knowledge of both human motions and subsequent action steps in order to predict in real-time the intended target of a human performing a reaching motion. Motion-level anticipatory models are constructed using multiple demonstrations of human reaching motions. We produce a library of motions from human demonstrations, based on a statistical representation of the degrees of freedom of the human arm, using time series analysis, wherein each time step is encoded as a multivariate Gaussian distribution. We demonstrate the benefits of this approach through offline statistical analysis of human motion data. The results indicate a considerable improvement over prior techniques in early prediction, achieving 70% or higher correct classification on average for the first third of the trajectory (< 500 ms). We also provide a proof of concept through the demonstration of a human-robot cooperative manipulation task performed with a PR2 robot. Finally, we analyze the quality of task-level anticipatory knowledge required to improve prediction performance early in the human motion trajectory.
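    The per-timestep Gaussian encoding lends itself to a simple log-likelihood classifier over partial trajectories; the sketch below illustrates that idea with diagonal covariances and synthetic, time-aligned demonstrations, which are simplifying assumptions rather than the paper's exact formulation.

        # Minimal sketch: classify a partial reaching motion against a library of
        # per-timestep multivariate Gaussians, one sequence per candidate target.
        import numpy as np
        from scipy.stats import multivariate_normal

        def build_library(demos_by_target):
            """demos_by_target: {target: list of time-aligned (T, D) arrays}."""
            library = {}
            for target, demos in demos_by_target.items():
                stacked = np.stack(demos)            # (N, T, D)
                means = stacked.mean(axis=0)         # (T, D)
                stds = stacked.std(axis=0) + 1e-3    # regularized diagonal covariance
                library[target] = (means, stds)
            return library

        def predict_target(partial, library):
            """Return the target whose per-step Gaussians best explain the observed prefix."""
            scores = {}
            for target, (means, stds) in library.items():
                scores[target] = sum(
                    multivariate_normal.logpdf(x, mean=means[t], cov=np.diag(stds[t] ** 2))
                    for t, x in enumerate(partial))
            return max(scores, key=scores.get)

        # Synthetic demos: two targets with slightly different 2-D wrist trajectories.
        rng = np.random.default_rng(1)
        def make_demos(offset):
            return [np.linspace(0, 1, 50)[:, None] * [1.0, offset] + rng.normal(0, 0.02, (50, 2))
                    for _ in range(10)]
        library = build_library({"target_A": make_demos(0.5), "target_B": make_demos(-0.5)})
        observed = np.linspace(0, 1, 50)[:, None] * [1.0, 0.5] + rng.normal(0, 0.02, (50, 2))
        print(predict_target(observed[:16], library))   # classify from roughly the first third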

    C-LEARN: Learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy

    Learning from demonstrations has been shown to be a successful method for non-experts to teach manipulation tasks to robots. These methods typically build generative models from demonstrations and then use regression to reproduce skills. However, this approach has limited ability to capture hard geometric constraints imposed by the task. On the other hand, while sampling- and optimization-based motion planners exist that reason about geometric constraints, these are typically carefully hand-crafted by an expert. To address this technical gap, we contribute C-LEARN, a method that learns multi-step manipulation tasks from demonstrations as a sequence of keyframes and a set of geometric constraints. The system builds a knowledge base for reaching and grasping objects, which is then leveraged to learn multi-step tasks from a single demonstration. C-LEARN supports multi-step tasks with multiple end effectors; reasons about SE(3) volumetric and CAD constraints, such as the need for two axes to be parallel; and offers a principled way to transfer skills between robots with different kinematics. We embed the execution of the learned tasks within a shared autonomy framework, and evaluate our approach by analyzing the success rate when performing physical tasks with a dual-arm Optimas robot, comparing the contribution of different constraint models, and demonstrating the ability of C-LEARN to transfer learned tasks by performing them with a legged dual-arm Atlas robot in simulation.
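    To illustrate the keyframe-plus-constraint representation, the sketch below stores an end-effector keyframe together with an axis-parallel constraint inferred from demonstrated poses; the class names, detection rule, and 5-degree tolerance are illustrative assumptions, not C-LEARN's actual implementation.

        # Minimal sketch: infer an "axis parallel to world z" constraint from demo keyframe poses.
        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class Keyframe:
            ee_pose: np.ndarray                        # 4x4 end-effector pose
            constraints: list = field(default_factory=list)

        def infer_parallel_constraint(demo_poses, world_axis=np.array([0.0, 0.0, 1.0]), tol_deg=5.0):
            """Record the constraint only if every demonstration respects it."""
            for pose in demo_poses:
                tool_axis = pose[:3, 2]                # third rotation column = tool z-axis
                angle = np.degrees(np.arccos(np.clip(tool_axis @ world_axis, -1.0, 1.0)))
                if angle > tol_deg:
                    return None
            return ("axis_parallel", world_axis)

        # Two demonstrations of the same keyframe, both with the tool pointing up.
        demos = [np.eye(4), np.eye(4)]
        keyframe = Keyframe(ee_pose=demos[0].copy())
        constraint = infer_parallel_constraint(demos)
        if constraint is not None:
            keyframe.constraints.append(constraint)
        print(keyframe.constraints)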

    A Theatrical Mobile-Dexterous Robot Directed through Shared Autonomy

    We present the deployment of a 16-DoF dual-arm mobile manipulator as an on-stage actor in the MIT 2016 Pageant, a 60-minute live play performed for the centennial celebration of the Massachusetts Institute of Technology campus move from Boston to Cambridge. The robot performed using expressive motions, navigated a 250-foot-long thrust stage through a wireless connection, and was directed remotely by a human operator using a shared autonomy system. We report on the technical framework and human-robot interaction that enabled the performance, including motion planning, coordination of action with human actors, and the challenges in navigation, manipulation, perception, and system reliability.

    Director: A User Interface Designed for Robot Operation with Shared Autonomy

    Operating a high-degree-of-freedom mobile manipulator, such as a humanoid, in a field scenario requires constant situational awareness, capable perception modules, and effective mechanisms for interactive motion planning and control. A well-designed operator interface presents the operator with enough context to quickly carry out a mission and the flexibility to handle unforeseen operating scenarios robustly. By contrast, an unintuitive user interface can increase the risk of catastrophic operator error by overwhelming the user with unnecessary information. With these principles in mind, we present the philosophy and design decisions behind Director, the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC). At the heart of Director is an integrated task execution system that specifies sequences of actions needed to achieve a substantive task, such as drilling a wall or climbing a staircase. These task sequences, developed a priori, make online queries to automated perception and planning algorithms with outputs that can be reviewed by the operator and executed by our whole-body controller. Our use of Director at the DRC resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly trained operators. We discuss the primary interface elements that comprise Director, and we provide an analysis of its successful use at the DRC. Funding: United States Defense Advanced Research Projects Agency via the Air Force Research Laboratory (award FA8750-12-1-0321); United States Office of Naval Research (award N00014-12-1-0071).
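    The task-sequence-with-operator-review pattern described above can be sketched as follows; the step names and the stubbed planner, review, and controller functions are illustrative assumptions, not Director's actual API.

        # Minimal sketch: each step queries an automated planner, the operator reviews the
        # candidate plan, and only approved plans go to the whole-body controller.
        def plan_step(step_name):
            # Stand-in for an online query to perception/planning modules.
            return f"<candidate plan for '{step_name}'>"

        def operator_approves(plan):
            # Stand-in for the operator reviewing the plan in the UI; auto-approve here.
            print("review:", plan)
            return True

        def execute(plan):
            # Stand-in for handing the approved plan to the controller.
            print("executing:", plan)

        task_sequence = ["walk to wall", "pick up drill", "cut circular hole"]
        for step in task_sequence:
            plan = plan_step(step)
            if operator_approves(plan):
                execute(plan)
            else:
                print("operator rejected plan; replan or fall back to teleoperation")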